
    Dynamical biomarkers in teams and other multiagent systems

    Effective team behavior in high-performance environments, such as sport and the military, requires individual team members to efficiently perceive unfolding task events, predict the actions and intents of other team members, and plan and execute their own actions to simultaneously accomplish individual and collective goals. To enhance team performance through effective cooperation, it is crucial to measure the situation awareness and dynamics of each team member and how they collectively impact the team's functioning. Further, to be practically useful in real-life settings, such measures must be easily obtainable from existing sensors. This paper presents several methodologies that can be applied to positional and movement acceleration data of team members to quantify and/or predict team performance, assess situation awareness, and help identify task-relevant information to support individual decision-making. Given the limited reporting of these methods within military cohorts, the methodologies are described using examples from team sports and teams training in virtual environments, with discussion of how they can be applied to real-world military teams.
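    The kind of measure derivable from positional data can be illustrated with a minimal sketch. The specific metric below (mean windowed Pearson correlation between players' position traces, as a crude proxy for movement coordination) is an assumption for illustration, not a methodology taken from the paper:

    ```python
    import numpy as np

    def pairwise_sync(positions, window=50):
        """Mean windowed Pearson correlation between each pair of players'
        1-D position traces -- a simple proxy for team movement coordination.

        positions: array of shape (n_players, n_samples)
        """
        n_players, n_samples = positions.shape
        scores = []
        for i in range(n_players):
            for j in range(i + 1, n_players):
                # Slide non-overlapping windows along the two traces and
                # correlate each pair of segments.
                for start in range(0, n_samples - window + 1, window):
                    a = positions[i, start:start + window]
                    b = positions[j, start:start + window]
                    scores.append(np.corrcoef(a, b)[0, 1])
        return float(np.mean(scores))

    # Two players moving in near-unison plus one moving independently:
    # the independent player drags the team-level score down.
    t = np.linspace(0, 10, 200)
    rng = np.random.default_rng(0)
    team = np.vstack([
        np.sin(t) + 0.05 * rng.standard_normal(t.size),
        np.sin(t) + 0.05 * rng.standard_normal(t.size),
        rng.standard_normal(t.size),
    ])
    print(round(pairwise_sync(team), 2))
    ```

    In practice such a score would be computed over streaming tracking data (e.g. GPS or local positioning systems), which is what makes it deployable with existing sensors.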

    Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI

    This study investigated the utility of supervised machine learning (SML) and explainable artificial intelligence (AI) techniques for modeling and understanding human decision-making during multiagent task performance. Long short-term memory (LSTM) networks were trained to predict the target selection decisions of expert and novice players completing a multiagent herding task. The results revealed that the trained LSTM models could not only accurately predict the target selection decisions of expert and novice players, but that these predictions could be made at timescales that preceded a player's conscious intent. Importantly, the models were also expertise specific, in that models trained to predict the target selection decisions of experts could not accurately predict the target selection decisions of novices (and vice versa). To understand what differentiated expert and novice target selection decisions, we employed the explainable-AI technique SHapley Additive exPlanations (SHAP) to identify which informational features (variables) most influenced model predictions. The SHAP analysis revealed that experts were more reliant on information about target direction of heading and the location of coherders (i.e., other players) compared to novices. The implications and assumptions underlying the use of SML and explainable-AI techniques for investigating and understanding human decision-making are discussed.
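    The modeling-plus-attribution pipeline described above can be sketched in miniature. This is a stand-in, not the paper's method: a NumPy logistic classifier substitutes for the LSTM, permutation importance (a much cruder technique) substitutes for SHAP, and the data and feature names (borrowed loosely from the abstract) are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in data: each trial is summarised by three candidate
    # features; only the first two actually drive the simulated choice.
    features = ["target_heading", "coherder_distance", "target_speed"]
    X = rng.standard_normal((500, 3))
    y = (0.9 * X[:, 0] - 0.7 * X[:, 1]
         + 0.1 * rng.standard_normal(500) > 0).astype(float)

    # Minimal logistic-regression classifier trained by gradient descent
    # (substituting for the LSTM, which would need a deep-learning stack).
    w = np.zeros(3)
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - y) / len(y)

    def accuracy(X, y, w):
        return float(np.mean(((X @ w) > 0) == y))

    base = accuracy(X, y, w)

    # Permutation importance: accuracy drop when one feature is shuffled.
    # This plays the role SHAP plays in the paper -- attributing
    # predictions to input features -- without the Shapley machinery.
    for k, name in enumerate(features):
        Xp = X.copy()
        Xp[:, k] = rng.permutation(Xp[:, k])
        print(f"{name}: importance = {base - accuracy(Xp, y, w):.3f}")
    ```

    The same logic scales up: the real study's SHAP values decompose each individual LSTM prediction into per-feature contributions, which is what lets expert and novice models be contrasted feature by feature.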